A Passive pHRI Controller for Assisting the User in Partially Known Tasks

In this article, a passive physical human–robot interaction (pHRI) controller is proposed to enhance pHRI performance in terms of precision, cognitive load, and user effort in cases where partial knowledge of the task is available. Partial knowledge refers to a subspace of $SE(3)$ determined by the desired task, generally mixing both position and orientation variables, and is mathematically approximated by parametric expressions. The proposed scheme, which utilizes the notion of virtual constraints and the prescribed performance control methodology, is proved to be passive with respect to the interaction force while guaranteeing constraint satisfaction in all cases. The control scheme is experimentally validated and compared with a dissipative control scheme utilizing a KUKA LWR4+ robot in a master–slave task; the experiments also include an application to a robotic assembly case.

cognition regarding the execution of the task, while the robot reduces the user effort and cognitive load needed to accomplish the task [3] and increases accuracy [4]. In robotic task programming, kinesthetic teaching has been utilized to incrementally learn a task [5], reducing the duration needed for its programming. Moreover, as kinesthetic guidance provides an intuitive interface that allows the human teacher to navigate the robot by hand, the user is no longer required to possess any skills in robot programming.
As pHRI copes with the actual exchange of forces and physical power between the human and the robot, the primary key issue that needs to be resolved by the robot's controller concerns safety and dependability. Safety is often studied in terms of passive (inherent) and active compliance in the robot control literature. Passive compliance involves specialized hardware and is crucial in high-frequency contacts like collisions with rigid environments, while active compliance can be useful in human-robot interactions, which occur in a slower frequency range. The prominent motion control approach, which provides a robot with a compliant behavior, is impedance control, introduced by Hogan [6]. Under impedance control, the robot acts as a mechanical impedance with desired parameters or as a mechanical admittance [7]. Impedance control assumes knowledge of the robot's dynamic model and its environment, as well as the availability of interaction force measurements to decouple the system [7]. pHRI can be distinguished into two categories, namely, intentional and unintentional, as seen from the side of the human user. Some works focus on the classification of contact between those categories based on interaction forces, e.g., [8], and others on collision reaction strategies [9], [10] according to human safety standards [11].
In intentional pHRI tasks, an important key issue that needs to be addressed by the robot's controller concerns the robot's quality of performance. In pHRI tasks, robot performance concerns the robot's ability to adapt its behavior dynamically according to the task and the human intentions. The reported works on pHRI performance can be divided into two categories: those in which the robot does not use any knowledge of the task and those in which the robot exploits partial knowledge of the task, which is assumed to be available or is made available.
When there is no knowledge about the task, impedance control with zero target stiffness is utilized to allow the system to be fully compliant, as well as impedance parameter modulation to enhance the performance of the robot during the execution of the
task under human guidance. An adaptive admittance/impedance control scheme, inspired by human studies, is proposed in [12], switching damping between two values according to the end-effector velocity. In [13], the damping value is calculated online to minimize a cost function related to both the value of damping and its rate. In [14], the derivative of the forces is used to adjust damping. In [15], a fuzzy learning method to calculate the damping online is proposed. Although all the above works adjust only the target damping parameter, in [16], the inertia parameter is also adjusted to keep the bandwidth of the overall system constant. In this direction, some works propose the adjustment of both damping and inertia parameters, according to velocity [17] or acceleration estimates [16]. In other works [18], [19], the parameters of the impedance model are adapted to maintain stability of the coupled human-robot system, based on the detection of undesired oscillations. Moreover, human intention prediction is used either to adapt the target admittance [20] or to reshape the nominal path [21], [22] during pHRI. In [23], an automatic smooth transition between interaction modes is proposed to provide precision and comfort during pHRI in free space and enable force amplification when the robot is in contact with its environment.

In many cognitively demanding pHRI applications (e.g., robot-assisted minimally invasive surgery and car assembly), partial knowledge of the task exists [24], [25]. By exploiting this knowledge, the cognitive load of the user can be reduced, resulting in higher accuracy, faster completion, and less fatigue for the user. In the literature, there are many works that exploit partial knowledge of the task to enhance pHRI performance, either in the form of a human intention model or in the form of virtual task constraints.
In the cooperative lifting [26] or the planar manipulation of a heavy object [27], high-level knowledge of the task between humans is used. In particular, in [26], a human intention prediction scheme is proposed based on the minimum jerk model, while, in [27], the human motion is modeled as a revolute joint, and the parameters of the motion are estimated online based on force/torque measurements. In [28], an adaptive dynamical system with a stable limit cycle is utilized to model human hand shaking. Task knowledge with respect to its spatial characteristics is exploited in [29] in terms of a predefined desired path, where an admittance controller for fast point-to-point cooperative sliding of an object is proposed, utilizing a Kalman filter to estimate the parameters of the minimum jerk velocity profile. In a cooperative manipulation task addressed in [30], the full knowledge of the task dynamics and kinematics as well as the desired trajectory is assumed to be available; the authors propose a role allocation policy to distribute the effort among the participants, according to interaction force measurements. Knowledge of the task's scene in the form of "virtual fixtures" (VFs), also known as "active constraints" (ACs), is considered in many works, particularly those involving robotic surgery applications. ACs were first introduced in telerobotic manipulation [3], [31] and have been utilized in surgical, industrial [25], [32], or even underwater robotic tasks [33] to enhance operator performance in terms of execution time, precision, and error rates. VFs are placed based on the prior partial knowledge of the task and/or environment and provide the user with haptic cues, relieving, in this way, some of his/her cognitive burden, similarly to the physical assistance given by a ruler while drawing a line [3]. The use of VFs or ACs has also been considered a special application of shared control [34].
Haptic shared control is shown to improve task performance, control effort, and operator cognitive workload in telemanipulation tasks in [35]. A recent review of shared control in cooperative pHRI tasks is given in [36].
A more detailed review of the implementations of VFs in surgical tasks can be found in [37]. The works of [38] and [39] consider a parametric analytic expression of a given reference path and propose anisotropic damping, within an admittance control law, to facilitate the user in a surgery task, discouraging motions close to the constraints. However, the orientation is not taken into consideration. In [40], a target stiffness is introduced in the 6-D end-effector space in addition to damping, having as an equilibrium a desired path and a desired orientation that are not coupled. In [41], a dissipative control (DC) scheme is proposed, considering position-orientation couplings, for complete pose control, employing energy redirection for increased task accuracy. However, this approach does not guarantee constraint satisfaction in all cases. In [42], a control scheme for VF enforcement and adaptation is proposed, which involves parametric expressions of oriented paths and penetrable VFs.
In this article, we consider the case of a generic task, for which partial knowledge is available with respect to a 6-D work subspace of the robot's end-effector, which may include both position and orientation variables. This subspace is mathematically modeled by parametric expressions, reflecting the desired task; the parameters may couple position and orientation variables depending on the task. Utilizing the notion of virtual fixtures, we propose a robot control scheme that enhances pHRI performance in terms of precision, cognitive load, and user effort by enforcing virtual constraints around the subspace of SE(3) defined by the partial knowledge of the task. The closed-loop system is proved to be passive with respect to the interaction force, while guaranteeing constraint satisfaction in all cases. Moreover, it is shown experimentally that, in a 3-D oriented path task, it achieves better performance as compared to the dissipative controller [41]. The proposed approach is illustrated experimentally in a robotic assembly task of a mobile phone. The proposed controller design draws on the prescribed performance control (PPC) methodology, which was first introduced in [43] and extended to the approximation-free case in [44]-[46]. It has been applied in the design of robot position controllers [47]-[50], in force/position control [51], in contact establishment [52], in constrained visual servoing [53], as well as to impose constraints during the robot's autonomous operation [54]-[57].
The contribution of the proposed scheme lies: 1) in its generalized formulation, which incorporates a) partial task knowledge, even in cases where position and orientation variables are coupled, and b) task uncertainty in the degree of parameterization and the maximum allowable deviations of the end-effector pose, thus greatly accelerating the pHRI task; and 2) in the rigorous stability proof of the proposed virtual constraint controller, which establishes not only strict output passivity from the interaction force to the robot's end-effector velocity, but also the boundedness of all closed-loop signals, which implies the satisfaction of the virtual constraints at all times.
The rest of this article is organized as follows. In Section II, the parametric modeling of partially known tasks is described, and some representative examples are given. The problem description and the control objectives are explained in Section III. In Section IV, the calculation of the nearest desired pose is detailed. In Section V, the control signal is developed, and the proofs of passivity and state boundedness are given. An experimental evaluation of the proposed control scheme is presented in Section VI, followed by a discussion on its significance and limitations in Section VII. Finally, Section VIII concludes this article. Preliminaries and certain proof details are provided in Appendixes A and B.

II. PARAMETRIC MODELING OF PARTIALLY KNOWN TASKS
Consider an N-dof robotic manipulator, whose joint variables are denoted by q ∈ R^N. Let x = [p^T Q^T]^T ∈ T be the generalized pose of the end-effector, with T = R^3 × S^3, expressed as a combination of the position p and the orientation in the form of a unit quaternion Q (quaternion preliminaries can be found in Appendix A). The mapping between the generalized velocity v = [ṗ^T ω^T]^T of the end-effector, with ṗ and ω being the translational and angular velocities, respectively, and ẋ is
ẋ = J_x v (1)
where J_x = diag(I_3, (1/2)J_Q) ∈ R^{7×6}, with J_Q being the matrix mapping the angular velocity of the frame to the unit quaternion rates of Q (see (31) in Appendix A), and v = J(q)q̇, where J(q) ∈ R^{6×N} is the robot Jacobian, mapping the velocity of the joints to the end-effector velocity.
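The quaternion-rate map J_Q can be sketched numerically. The snippet below is an illustration (not the paper's implementation, whose (31) is in Appendix A): for Q = [η, ε^T]^T it builds the standard 4×3 matrix satisfying Q̇ = (1/2) J_Q(Q) ω for a world-frame angular velocity ω.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def J_Q(Q):
    """4x3 matrix mapping world-frame angular velocity to quaternion rates,
    dQ/dt = 0.5 * J_Q(Q) @ omega, with Q = [eta, eps_x, eps_y, eps_z]."""
    eta, eps = Q[0], Q[1:]
    return np.vstack((-eps.reshape(1, 3), eta * np.eye(3) - skew(eps)))
```

A quick sanity check is that Q^T Q̇ = 0 for any ω, i.e., the map keeps Q on the unit sphere S^3.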
The notion of a "partially known task" refers to the subset S of the task space T, which includes any oriented path potentially required to complete the task. Subset S depends on the uncertainty involved in this knowledge. Furthermore, the temporal properties of the task are considered unknown.
In many cases, the partially known task can be mathematically modeled by parameterized expressions, with m ≤ 6 denoting the minimum number of parameters required to describe the task. Therefore,
x_d(σ) = [p_d^T(σ) Q_d^T(σ)]^T ∈ T, with σ = [σ_1 … σ_m]^T ∈ R^m (2)
describes the desired end-effector poses for a given σ. Note that (2) implies, in general, a coupling between position and orientation variables. There have been previous works that utilize the coupling between position and orientation within the scope of learning and autonomously executing synchronized operations, e.g., [58], which refers to asymmetrical bimanual tasks. In this article, we consider hyperrectangles in the parametric space, i.e., each σ_i, i = 1, …, m, is bounded as σ̲_i < σ_i < σ̄_i, with σ̲_i, σ̄_i ∈ R being the lower and upper bounds of σ_i, respectively, and define the set of desired end-effector poses as follows:
S = {x_d(σ) ∈ T : σ̲_i < σ_i < σ̄_i, i = 1, …, m}.
To enhance clarity, in what follows, examples of industrial tasks with their respective parameterizations are provided, which differ with respect to the number of dofs and the level of coupling between position and orientation.

Example II.1. Surface cutting (1-D curve): Consider a cutting task in which the actual cutting curve is known. In this task, every point on the curve corresponds to a specific orientation of the cutting tool. Hence, both position and orientation can be parameterized utilizing a single parameter σ_1 (i.e., p_d(σ_1) and Q_d(σ_1)). The limits σ̲_1 and σ̄_1 reflect the starting and ending poses of the parameterized curve, respectively. Note that position is fully coupled with orientation in this case, as they both depend on σ_1.
In particular, this 1-D curve can be given as a sequence of K key frames denoted by x_{di}, with i = 1, …, K. It is possible to find an analytic expression x_d(σ) in which x_d(σ_i) = x_{di}, ∀i = 1, …, K. One way to achieve this is by constructing a sequence of Bezier functions x_{di}(σ), where continuity is preserved in both the path and its tangential vector, in which case x_{di}(σ_{i+1}) = x_{d,i+1}(σ_{i+1}) and ∂x_{di}/∂σ|_{σ=σ_{i+1}} = ∂x_{d,i+1}/∂σ|_{σ=σ_{i+1}}, ∀i = 1, …, K − 1. A Bezier equivalent in orientation can be provided utilizing either the traditional SQUAD method or Hermite interpolation, which is based on smooth blending of two great circular arcs in SO(3), taking into account the tangent vector at every point of the path on S^3 [59], the 3-D sphere where unit quaternions lie.
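The piecewise-Bezier construction for the position part can be sketched as follows. This is a minimal illustration, not the paper's implementation: the control points are derived from finite-difference tangents (a Catmull-Rom-style choice, which the text does not fix), and σ is normalized to [0, 1] over the whole key-frame sequence.

```python
import numpy as np

def cubic_bezier(P0, P1, P2, P3, s):
    """Evaluate a cubic Bezier segment at local parameter s in [0, 1]."""
    return ((1 - s)**3)*P0 + 3*s*((1 - s)**2)*P1 + 3*(s**2)*(1 - s)*P2 + (s**3)*P3

def bezier_path(keyframes, sigma):
    """Piecewise-cubic path through position keyframes, continuous in both
    value and tangent at the segment junctions (C1), as required in the text."""
    P = [np.asarray(p, float) for p in keyframes]
    K = len(P)
    # finite-difference tangent at each keyframe (one common choice)
    T = [P[min(i + 1, K - 1)] - P[max(i - 1, 0)] for i in range(K)]
    i = min(int(sigma * (K - 1)), K - 2)   # segment index
    s = sigma * (K - 1) - i                # local parameter within segment i
    return cubic_bezier(P[i], P[i] + T[i] / 6, P[i + 1] - T[i + 1] / 6, P[i + 1], s)
```

Because the inner control points on both sides of a junction are built from the same tangent T[i], the path's tangential vector matches across segments, which is the continuity condition stated above.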
Example II.2. Milling along a path on a curved surface: Consider a milling task along a curved path, in which the human physically guides a robot holding a milling tool. In this task, every point on the curve corresponds to a specific orientation of the tool, but any orientation around the normal to the cutting surface should be allowed [see Fig. 1(a)], since it does not affect the task. This fact introduces a redundant dof in orientation. Hence, the position can be parameterized as p_d(σ_1) and the orientation as Q_d(σ_1, σ_2), with n(σ_1) ∈ R^3 being the normal to the surface and σ_2 the angle around n(σ_1). Note that, for a given σ_1, Q_0(σ_1) is one allowable orientation and Q_d(σ_1, σ_2) represents a geodesic circle of the unit quaternion sphere, since it is the intersection between the vector space of [Q_0 J_{Q_0} n(σ_1)] and the quaternion sphere (see Appendix A); Q_0 represents an orientation lying on the geodesic circle. The limits σ̲_1 and σ̄_1 reflect the starting and ending poses of the parameterized curve, respectively, and the limits of σ_2 can be selected as σ̲_2 = 0 and σ̄_2 = 1, to allow a full rotation around n(σ_1).
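One plausible realization of this family of orientations (the extracted text does not reproduce the paper's exact expression) is to rotate Q_0 about the surface normal n by the angle 2πσ_2, so that σ_2 ∈ [0, 1] sweeps the full geodesic circle:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions [eta, eps_x, eps_y, eps_z]."""
    ea, va = a[0], a[1:]
    eb, vb = b[0], b[1:]
    return np.hstack((ea*eb - va @ vb, ea*vb + eb*va + np.cross(va, vb)))

def Q_d(Q0, n, sigma2):
    """Orientation allowed at a path point: Q0 rotated about the normal n
    by 2*pi*sigma2. A sketch of the redundant-orientation parameterization
    of Example II.2, not the paper's exact formula."""
    th = np.pi * sigma2                      # half of the angle 2*pi*sigma2
    Qn = np.hstack((np.cos(th), np.sin(th) * n / np.linalg.norm(n)))
    return quat_mul(Qn, Q0)
```

Note that σ_2 = 1 yields the antipodal quaternion −Q_0, which represents the same physical orientation, consistent with a full 2π turn about n.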
Example II.3. Engraving on a 3-D surface generated by rotation of a planar curve: Let us consider the case of a surface produced by rotating a 1-D planar curve f(σ_1) ∈ R^3 around an axis k [see Fig. 1(b)]. Hence, the surface of the object can be described as
p_d(σ_1, σ_2) = R(k, σ_2)f(σ_1)
where R(k, σ_2) ∈ R^{3×3} is the rotation matrix and σ_2 is the rotation angle. To engrave the object's surface, the engraving tool should remain perpendicular to the surface. Therefore, there is a coupling between the desired position and orientation, which both rely on σ_1 and σ_2. Hence, the desired orientation can be parameterized as Q_d(σ_1, σ_2, σ_3), with σ_3 being a redundant dof representing a rotation around the vector normal to the surface, similarly to Example II.2. The limits of σ_1 reflect the starting and ending points of the surface along f(σ_1), those of σ_2 for a surface derived by a full rotation of f(σ_1) can be set to σ̲_2 = −π and σ̄_2 = π, and those of σ_3 can be set to σ̲_3 = 0 and σ̄_3 = 1 to allow a full rotation around the normal.
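The surface-of-revolution parameterization of Example II.3 can be sketched with the Rodrigues rotation formula; the generating curve f below is an arbitrary illustrative choice.

```python
import numpy as np

def rodrigues(k, angle):
    """Rotation matrix about unit axis k by the given angle (Rodrigues)."""
    k = np.asarray(k, float) / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def p_d(f, sigma1, sigma2, k=np.array([0.0, 0.0, 1.0])):
    """Desired position on the surface: the planar generating curve f(sigma1)
    rotated about axis k by sigma2, as in Example II.3 (f is user-supplied)."""
    return rodrigues(k, sigma2) @ f(sigma1)
```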
Example II.4. Tool motion in a unilaterally constrained 3-D space: Consider a task in which the motion should be constrained so that it does not penetrate a surface, while the orientation is totally free, as shown in Fig. 2(a). Let us assume that the surface is parameterized as f(σ_1, σ_2) ∈ R^3. Hence, the desired position can be described as
p_d(σ_1, σ_2, σ_3) = f(σ_1, σ_2) + σ_3 n(σ_1, σ_2)
where n(σ_1, σ_2) is the vector normal to the surface and σ_3 > σ̲_3 = 0 constrains the motion above the surface. The desired orientation in this task is parameterized as Q_d(σ_4, σ_5, σ_6), representing the whole S^3 space. The bounds of σ_1 and σ_2 should reflect the boundaries of the surface f(σ_1, σ_2), that of σ_3 should represent the maximum allowable deviation from the surface, and the bounds of σ_4, σ_5, and σ_6 should be selected such that Q_d(σ_4, σ_5, σ_6) covers the whole S^3.

Example II.5. Rectangular peg in a hole: Consider an industrial task in which the human guides the robot's end-effector grasping a polyhedral peg, with the purpose of inserting it in a hole, as shown in Fig. 2(b). Perception errors do not allow precise knowledge of the hole's pose on the working surface of the robot.
In that case, the desired parameterized positions for the task can be described in terms of the plane's coordinate axes t_1, t_2 ∈ R^3. By imposing the bounds σ̲_1 < σ_1 < σ̄_1, σ̲_2 < σ_2 < σ̄_2, and σ̲_3 < σ_3 < σ̄_3, a space including the hole and the space above the upper surface of the plane can be defined [see Fig. 2(b)]. Additionally, the desired orientation can be described as Q_d(σ_4), where σ_4 is a redundant dof representing a rotation around the normal vector to the surface and Q_0 is the quaternion describing the orientation of the frame. The limits of σ_4 can be set to σ̲_4 = 0 and σ̄_4 = 1, similarly to Example II.2, to allow a full rotation around the axis normal to t_1 and t_2. Table I summarizes the representative examples with the couplings between position and orientation.

III. PROBLEM DESCRIPTION
When the robot is under human guidance, knowledge of the parameterized desired pose x_d(σ) can be exploited by the pHRI controller to assist the user in the task. To solve the problem, we propose to confine the distance of the current end-effector pose to the desired pose within a predefined set, yielding translational and orientation deviations less than prespecified amounts. As the desired pose is a parameterized function (see Section II), the closest desired pose to the current end-effector pose should be determined. Constraining the distance to the closest desired pose is satisfied by the action of a controller, designed to guarantee that the boundaries of the predefined residual set will not even be reached. Acting on a gravity-compensated robot, the produced control input should additionally preserve the passivity between the human interaction force and the end-effector velocity and achieve the boundedness of all signals in the closed-loop system. Summarizing, the main problem can be split into the following subproblems.
1) Given the current end-effector pose and the parameterized desired task, find the closest desired pose and calculate the distance of the current pose to it.
2) Utilizing the distance of the current pose to the closest desired pose and a predefined residual set, design a controller to achieve the following objectives: a) the formulated closed-loop system is strictly output passive with respect to the end-effector velocity, under the exertion of a generalized force imposed by the human guidance; b) the distance of the current end-effector pose to the closest desired one is constrained to evolve strictly within a predefined (according to constraints) set; and c) all closed-loop signals are bounded.

Remark 1: To guarantee the collaboration of the solutions of the two subproblems described above, it is mandatory that they operate on different time scales. Specifically, the closest desired pose should be determined significantly faster than the duty cycle of the control loop. This is clearly accomplished whenever an analytic solution is available; in all other cases, it is achievable when the problem is locally convex, since a general optimization algorithm can be utilized to yield a sufficiently accurate local minimum within one control cycle.

IV. CLOSEST DESIRED POSE
In this section, the distance of the current end-effector pose x = [p T Q T ] T ∈ T to the nearest desired pose is defined.
Note that in the case of position, the shortest path between two points is a straight line, while in the case of orientation, the shortest path between two points on the unit quaternion sphere is the minor arc in S^3 passing through these points (see Appendix A). Hence, the minor arc is analogous to a straight line in Euclidean geometry.
In that direction, let
e_p(p, σ) = p − p_d(σ)
be the translational error for some σ (i.e., from some position in the parametric task) and
Q_e(Q, σ) = [η_e(Q, σ) ε_e^T(Q, σ)]^T = Q ∗ Q_d^{−1}(σ) = [cos(ϑ_e(Q, σ)/2) sin(ϑ_e(Q, σ)/2)k_e^T(Q, σ)]^T
be the orientation error for some σ (i.e., from some orientation in the parametric task), which describes the rotation needed to align Q_d(σ) with Q, with ϑ_e(Q, σ) ∈ (−π, π) and k_e(Q, σ) being the angle and unit axis of rotation, respectively, while "∗" denotes the quaternion product (see Appendix A). Note that η_e = 1 if Q = Q_d(σ).
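These two errors can be sketched numerically. The snippet below is an illustration under the stated convention (error quaternion Q ∗ Q_d^{−1}); it returns the translational error and the error angle ϑ_e, folding Q and −Q onto the same rotation.

```python
import numpy as np

def quat_conj(Q):
    """Conjugate (= inverse for unit quaternions) of [eta, eps]."""
    return np.hstack((Q[0], -Q[1:]))

def quat_mul(a, b):
    """Hamilton product of quaternions [eta, eps]."""
    return np.hstack((a[0]*b[0] - a[1:] @ b[1:],
                      a[0]*b[1:] + b[0]*a[1:] + np.cross(a[1:], b[1:])))

def pose_errors(p, Q, p_des, Q_des):
    """Translational error e_p and orientation error angle theta_e between
    the current pose (p, Q) and a desired pose (p_des, Q_des). The scalar
    part of the error quaternion is eta_e = cos(theta_e / 2)."""
    e_p = p - p_des
    Q_e = quat_mul(Q, quat_conj(Q_des))
    eta_e = np.clip(abs(Q_e[0]), 0.0, 1.0)  # abs(): Q and -Q are the same rotation
    theta_e = 2.0 * np.arccos(eta_e)
    return e_p, theta_e
```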
The following metric of the generalized deviation of the current pose x from x_d(σ) is proposed:
ψ(x, σ) = ||e_p(p, σ)||²/r_m² + (1 − η_e²(Q, σ))/sin²(ϑ_m/2)
where r_m and ϑ_m are constant parameters representing prespecified (user-defined) bounds on the translational and orientational deviation, respectively. In particular, r_m and ϑ_m are selected so that r_m > r_0 > 0 and π ≥ |ϑ_m| > ϑ_0 > 0. A similar metric is proposed in [60]. Note that the minimizing parameter vector
σ* = argmin_σ ψ(x, σ) (7)
corresponds to the minimum deviation ψ(x, σ*). To simplify notation, ψ(x, σ*) is from now on denoted as ψ*(x). Our aim is to guarantee
ψ*(x) < 1 (8)
where e*_p(p) = e_p(p, σ*) and η*_e(Q) = η_e(Q, σ*). The latter is equivalent to coupled bounds on ||e*_p(p)|| and η*_e(Q). Therefore, the achievement of (8) imposes prescribed (user-defined) performance bounds on both the translational and orientation errors. Note that when Q = Q_d, the allowable maximum translational deviation is r_m, and when p = p_d, the allowable maximum deviation in orientation is ϑ_m. In any other case, the maximum allowable deviation in translation and orientation is less than r_m and ϑ_m, respectively. In general, there is a coupling between the boundaries in translation and orientation, since ψ consists of a sum of both metrics. Let
Ω_1 = {x ∈ T : ψ*(x) < 1} (10)
be the allowable region containing the set of desired end-effector poses S (i.e., S ⊂ Ω_1). Since σ is constrained, as explained in Section II and illustrated in the examples, (7) can be formalized as the solution of the constrained optimization problem
σ* = argmin_{σ̲_i ≤ σ_i ≤ σ̄_i} ψ(x, σ). (11)
For any sufficiently smooth task, ψ(x, σ) can be considered locally convex with respect to σ within a window |σ_i − σ*_i| < α, i = 1, …, m, for some α > 0. Note that such a consideration is not restrictive, as any task that does not satisfy this property may equivalently be subdivided into several sufficiently smooth subtasks. Furthermore, we assume that the desired pose x_d(σ) is generated before the beginning of the human-robot interaction, so that x_d(σ_0) = x_0 for some σ_0 ∈ R^m with σ̲_i ≤ σ_{0,i} ≤ σ̄_i, i = 1, …, m, with x_0 being the initial end-effector pose, making the initial distance ψ*(x) zero.
Hence, a good choice of the initial guess for each control cycle κ ∈ N is the previously found optimum σ*_{κ−1} for κ > 0, and σ*_0 = σ_0 for κ = 0. If |σ*_{i,κ} − σ*_{i,κ−1}| < α for all κ ∈ N and i = 1, …, m, then, owing to the local convexity property, the optimum σ*_κ can be determined using any convex constrained optimization algorithm. Therefore, it is possible to quickly obtain a sufficiently accurate local minimum within the control cycle, as required by our controller (see Remark 1). Hence, ψ*(x) is available to the controller for all t ≥ 0.
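The warm-started, windowed search described above can be sketched as follows. This is an illustration only: it uses a simple projected gradient descent with a numerical gradient, whereas the experiments in Section VI report a bounded Levenberg-Marquardt solver; psi, the bounds, and alpha are task-dependent inputs.

```python
import numpy as np

def closest_sigma(psi, sigma_prev, bounds, alpha, iters=100, step=0.1):
    """One control-cycle update of the nearest-parameter search: minimize
    psi(sigma) by projected gradient descent, warm-started at the previous
    optimum sigma_prev and confined to a +/- alpha window around it
    (the local-convexity window), intersected with the task bounds."""
    s = np.asarray(sigma_prev, float).copy()
    lo = np.array([max(b[0], sp - alpha) for b, sp in zip(bounds, sigma_prev)])
    hi = np.array([min(b[1], sp + alpha) for b, sp in zip(bounds, sigma_prev)])
    h = 1e-6
    for _ in range(iters):
        # central-difference gradient of psi at s
        g = np.array([(psi(s + h * e) - psi(s - h * e)) / (2 * h)
                      for e in np.eye(len(s))])
        s = np.clip(s - step * g, lo, hi)   # gradient step projected to window
    return s
```

On a locally convex psi, the iterate either converges inside the window or sticks to the window boundary, mirroring the |σ*_{i,κ} − σ*_{i,κ−1}| < α condition.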
Differentiating (13) with respect to time, we obtain the dynamics of the transformed error ε. The following control force, synthesized in the robot's end-effector space, is proposed: (17), where k ∈ R^+ is a scalar tunable control parameter and D ∈ R^{6×6} is a positive-definite diagonal matrix, representing the virtual dissipation introduced by the control scheme.

Remark 2: Note that by selecting an appropriately small gain k, one can exploit the nonlinearity of the term kT(ψ*(x)) in (17), which yields a flatter region when the error is close to the equilibrium and a steeper region when the error is close to the boundary ψ*(x) ≡ 1. Tuning this gain such that the forces generated by the attractive potential in the flat region cannot overcome static friction yields a behavior similar to DC, i.e., the user will not experience any forces when stationary.
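The flat-then-steep shaping described in Remark 2 can be illustrated with a PPC-style logarithmic transformation. The exact form of (14) is not reproduced in this text; T(ψ) = ln(1/(1 − ψ)) is one standard choice that is zero at ψ = 0, finite for small ψ, and unbounded as ψ → 1, which is all the remark relies on.

```python
import numpy as np

def T(psi):
    """Assumed PPC-style transformation: zero at psi = 0, monotonically
    increasing, and unbounded as psi -> 1 (not the paper's exact (14))."""
    return np.log(1.0 / (1.0 - psi))

def attractive_magnitude(k, psi):
    """Magnitude of the barrier term k*T(psi) appearing in the control
    force (17): nearly flat for small psi (so a small k keeps it below
    static friction, Remark 2) and steep near the boundary psi -> 1."""
    return k * T(psi)
```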
Assuming an N-dof nonredundant manipulator, which is compensated for gravity and for which the robot Jacobian J(q) is invertible and the mapping between the joint space and the task space is one-to-one, the robot model can be written in task space as follows:
Λ_x(x)v̇ + C_x(x, v)v = u_T + F_x (18)
where Λ_x(x) = [J(q)Λ^{−1}(q)J^T(q)]^{−1} and C_x(x, v)v = J^{−T}(q)C(q, q̇)q̇ − Λ_x(x)J̇(q)q̇, with Λ(q) ∈ R^{N×N} being the manipulator's inertia matrix, C(q, q̇) ∈ R^{N×N} the Coriolis and centripetal matrix, and F_x ∈ R^6 the external force and torque applied to the end-effector by the user. The control signal u_T is mapped to the joint space by the Jacobian transpose. Note that the task-space inertia matrix Λ_x is positive definite, and the matrix Λ̇_x − 2C_x is skew symmetric. Employing (17) in (18) yields the closed-loop dynamics (19). Define the state vector s = [v^T x^T ε]^T ∈ (R^6 × T × R); the closed-loop system (1), (16), and (19) is written in compact form as
ṡ = h(s, F_x) (20)
with u_T(x, v) being the control signal defined in (17).
Theorem 1: Consider the initial value problem
ṡ = h(s, F_x(t)), s(t_0) = s_0 ∈ Ω (21)
where Ω = R^6 × Ω_1 × R, with h(s, F_x) as defined in (20) and Ω_1 as defined in (10). The following statements hold.
(i) Under the exertion of an external input F_x of bounded energy, the system (21) is strictly output passive with respect to v, the state s is bounded, and x does not escape Ω_1.
Proof: (i) By definition, h(s, F_x(t)) is continuous in t and locally Lipschitz with respect to s. Owing to [61, Th. 3.1], there exists a time instance τ > t_0 such that (21) has a unique solution in a maximal time interval [t_0, τ) with τ ∈ (t_0, ∞), i.e., s(t) ∈ Ω for all t ∈ [t_0, τ). Consider the following candidate Lyapunov-like function:
V = (1/2)v^T Λ_x(x)v + (1/2)kε².
Taking its time derivative for all t ∈ [t_0, τ), solving (19) with respect to Λ_x v̇, and employing the skew symmetry of Λ̇_x − 2C_x and (16), V̇ becomes
V̇ ≤ v^T F_x − λ(D)||v||² (23)
where λ(D) ∈ R denotes the minimum eigenvalue of D. To derive (23), we have further utilized the fact that kJ_T^T(ψ*)ζ ≥ 0. Hence, (21) is strictly output passive for all t ∈ [t_0, τ) with respect to v [61]. By completing the squares in (23), we get
V̇ ≤ −(λ(D)/2)||v||² + (1/(2λ(D)))||F_x||². (24)
Integrating (24), we get (25). The integral term of (25) is bounded, owing to the fact that F_x is of bounded energy, as it represents the force applied by the human to guide the robot. Thus, ε and v are bounded for all t ∈ [t_0, τ). Stated otherwise, there exist compact sets Ω_ε and Ω_v such that ε(t) ∈ Ω_ε ⊂ R and v(t) ∈ Ω_v ⊂ R^6 for all t ∈ [t_0, τ). As a consequence, there exists a positive constant ε̄ such that (1/2)kε² ≤ ε̄ for all t ∈ [t_0, τ), which for the logarithmic function defined in (14) yields ψ*(x) < T^{−1}(2ε̄/k) < 1 for all t ∈ [t_0, τ), i.e., s(t) evolves within a compact subset of Ω for all t ∈ [t_0, τ). Finally, using [61, Th. 3.3], we can conclude that τ can be extended to ∞.
(ii) Employing the LaSalle invariance principle for F_x = 0_6, it is easy to show from (23) that the state will converge to s_d.
Remark 3: The above theoretical analysis is given for the nonredundant case. However, the passivity property can be preserved in the redundant case by injecting a damping term in the joint space. In particular, the proposed controller for the redundant case is given by u_q = J^T(q)u_T − D_q q̇, with u_T given by (17) and D_q ∈ R^{N×N} being a positive diagonal damping matrix. To show passivity in this case, we consider the following storage function: V_q(q, q̇) = (1/2)q̇^T Λ(q)q̇ + (1/2)kε²(q). It is not difficult to verify that its time derivative satisfies the following inequality:
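The constraint-satisfaction property proved above can be illustrated on a 1-DOF sketch of the closed loop: a mass pushed by a constant user force against the virtual damping and the barrier force derived from the potential (k/2)T(ψ)². This is illustrative only, with ψ = (x/r_m)² and the assumed transformation T(ψ) = ln(1/(1 − ψ)) standing in for the paper's (14); by the energy argument of the proof, |x| must stay strictly below r_m.

```python
import numpy as np

def simulate(F=5.0, m=1.0, d=1.1, k=0.5, r_m=0.01, dt=1e-3, t_end=2.0):
    """1-DOF closed-loop sketch: mass m under constant user force F,
    viscous damping d, and the barrier force -k*T(psi)*T'(psi)*dpsi/dx
    with psi = (x/r_m)**2 and T(psi) = ln(1/(1-psi)) (assumed form).
    Returns the peak |x|, which should remain below the bound r_m."""
    x, v, x_max = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        psi = (x / r_m) ** 2
        eps = np.log(1.0 / (1.0 - psi))              # eps = T(psi)
        u = -k * eps / (1.0 - psi) * (2.0 * x / r_m**2) - d * v
        v += dt * (F + u) / m                        # semi-implicit Euler
        x += dt * v
        x_max = max(x_max, abs(x))
    return x_max
```

Because the barrier potential grows without bound as ψ → 1, a bounded-energy input can never push the state to the constraint, which is the discrete-time counterpart of the boundedness argument in the proof of Theorem 1.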

VI. EXPERIMENTAL EVALUATION
To demonstrate the effectiveness of the proposed methodology, two experimental scenarios were tested using the seven-dof KUKA LWR4+ robotic manipulator. The first experimental scenario consists of a surface cutting task (similar to Example II.1), while the second is a folding assembly scenario. The proposed methodology was implemented in C++ utilizing the FRI library with control frequency f_s = 1000 Hz.
Two groups of 17 subjects each participated in the experiments. All participants were instructed to minimize the time of completion and the path distance. Initially, every subject was given time to get familiar with each of the two compared methods before running the experiment. In each experiment, the sequence of executing the compared methods was inverted for each subgroup of eight and nine participants. The main purpose of the comparative experimental evaluation is to demonstrate the controller's efficiency in terms of cognitive and physical load. Toward this direction, the following two quality metrics were used, common to both experiments: 1) the duration of task completion T_f, which is measured as the time of physical interaction, and 2) the total energy transferred (E) from the user to the robot and vice versa during the interaction, which was calculated as E = ∫|v^T F_x| dt. The utilization of the duration of task completion as an indirect metric of the cognitive load of the user is based on Fitts' law [62]
T_f = I_d / I_p
where I_p ∈ R^+ [bits/s] is the unknown but constant (within the short duration of the experiments) information capacity of the motor system of the subject, I_d ∈ R^+ [bits] is the information required to be processed for the accomplishment of the task, i.e., the task's difficulty, and T_f is the minimum duration required by the specific subject to accomplish the task. Instructing the subjects to minimize the time of completion ensures that the information processing rate remains close to I_p and that the total time duration is representative of the task's difficulty I_d. For instance, threading a needle requires more time than a regular peg-in-a-hole task when executed by the same user, as it is more cognitively demanding.
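The two common metrics can be computed from logged data as sketched below (an illustration of the definitions, not the authors' analysis code): the transferred energy as the time integral of |v^T F_x| via trapezoidal integration, and T_f as the time span over which interaction power is nonzero.

```python
import numpy as np

def interaction_metrics(t, v, F):
    """Quality metrics from logged pHRI data.
    t: (n,) sample times; v, F: (n, 6) end-effector velocity and wrench.
    Returns (T_f, E): interaction duration and E = integral |v^T F| dt."""
    power = np.abs(np.einsum('ij,ij->i', v, F))      # |v^T F| per sample
    E = float(np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(t)))
    active = power > 1e-6                            # samples with interaction
    T_f = float(t[active][-1] - t[active][0]) if active.any() else 0.0
    return T_f, E
```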

A. Cutting Task
First, the proposed methodology is tested in a master-slave setup, in which the user physically interacts with the master manipulator and the cutting takes place in a virtual environment by the slave manipulator. A seven-dof KUKA LWR4+ arm is used as the master device and a virtual visualization of a similar arm as the slave device (see Fig. 3). Both the slave arm and the virtual scene are visualized utilizing Rviz (embedded in the ROS framework). The virtual scene includes a spherical surface S_α with center p_c = [0.0 0.5585 −0.0869]^T m and radius r = 0.2 m, and a desired position path C_α on it [see Fig. 4(a)]. The desired position path is defined as an arc of length 0.2πr on the spherical surface [green line in Fig. 4(a)], which can be described similarly to Example II.1. Note that the orientation is designed so that the cutting tool attached to the slave arm will always be perpendicular to the surface. Before the initiation of the experiment, the robotic manipulator moves autonomously to the initial configuration [see Fig. 4(b)]. For comparison purposes, the full 6-D DC in translation and orientation [41] was also implemented. In this method and the proposed one, the nearest desired pose is found by utilizing the Levenberg-Marquardt optimization algorithm with boundary constraints to solve the optimization problem given in (11). The proposed method is implemented as given in Remark 3, and the parameters used for the proposed method are ϑ_m = 5.0° and r_m = 0.01 m, reflecting the task accuracy requirements in orientation and position, respectively, D = 1.1I_6, k = 0.5, and α = 0.05, which corresponds to a curve length of ±6.2 cm around the previous nearest point and, hence, specifies the optimization window at each step. The joint damping parameter D_q was tuned to minimize the user's effort.
It was found that the required joint damping injection in this robot is negligible, owing to the sufficient inherent joint viscous friction, which is equivalent to a damping injected by the control action. For the DC method, the following parameter values were used: .0 N, τ_c = 1.6 N·m, and θ = 0.52 rad. The nomenclature of the DC parameters follows the symbolism of [41].
Additional quality metrics, namely, the mean absolute error norm from the path in position e_p and the mean relative angle of the end-effector from the desired orientation ϑ_e, are utilized in this experiment. Boxplots of the statistical results are shown in Fig. 5. More specifically, Fig. 5(a) and (b) depicts the mean position and angle errors, respectively; the statistical difference between the proposed method (Guaranteed enforcement of Active Constraints, GAC) and DC is clearly shown, with GAC exhibiting higher accuracy than DC. In particular, the mean error norm in translation is 2.5 mm for GAC and 15.8 mm for DC, while the mean angular error is 0.03. In Fig. 5(c) and (d), the energy and the total duration are shown, respectively.
1 [Online]. Available: http://users.ics.forth.gr/∼lourakis/levmar/
A pairwise t-test was performed to assess whether there is a significant difference between the two methods based on the above-defined metrics. The results are shown in Table II. Regarding the mean position error, the angular error, and the total energy, the proposed controller significantly outperforms the DC method, as indicated by the low p-values (< 0.05). The values of the effect size, which express the magnitude of the difference between the two methods in standard deviation units, range from 1.0896 to 3.1739 and can be characterized as large. No such conclusion can be drawn for the total time duration.
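For completeness, the t statistic and the effect size for matched samples can be computed as in the following generic numpy sketch (our illustration; the per-subject metric arrays are placeholders).

```python
import numpy as np

def paired_stats(a, b):
    """Paired t statistic and Cohen's d_z effect size for two
    matched samples (one value per subject per method)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    sd = d.std(ddof=1)          # sample std of the per-subject differences
    d_z = d.mean() / sd         # effect size in standard deviation units
    t = d_z * np.sqrt(d.size)   # t statistic with n - 1 degrees of freedom
    return t, d_z
```

The p-value then follows from the t distribution with n − 1 degrees of freedom (e.g., via scipy.stats); an effect size above roughly 0.8 is conventionally characterized as large.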

B. Folding Assembly Task
To validate the applicability and effectiveness of the proposed method in a real task, the case of kinesthetic teaching of the robot for a folding assembly is also tested. More specifically, a two-part folding assembly is considered, between a mobile phone case and a printed circuit board (PCB) (A and B, respectively, in Fig. 6). The PCB is firmly grasped by a two-finger gripper attached to a KUKA LWR4+ manipulator. In this scenario, the phone case is considered static and attached to a supporting surface.
The task requires, first, the approach of the PCB toward the case and the establishment of contact and, second, the rotation of the PCB around the contact line to accomplish a folding motion. Snapshots of the cooperative folding assembly procedure are shown in Fig. 7. It is assumed that the contact line (green line in Fig. 7) and the supporting surface are known, e.g., identified by robotic vision. To facilitate the insertion of the USB connector into the phone case USB hole (Fig. 6, marked in yellow), the PCB is required to contact the phone case on the contact line with an inclination of 70° from the supporting surface, as shown in Fig. 7(a). Although it is assumed that both the contact line and the supporting surface are identified with relatively small errors, the opposite is assumed for the relative PCB-gripper pose, where large errors may well arise owing to an initial grasp error of the PCB or slippage of the object during its transfer, as shown in Fig. 8.
The task is divided into two phases. In the first phase, the user must guide the robot's end-effector so that the left edge of part B (see Fig. 6) contacts the left edge of part A from within the case, as shown in Fig. 7(a)-(c) and (f)-(h). To accomplish this phase, the PCB's USB connector (marked in yellow in part B of Fig. 6) should be inserted into the case USB hole (also marked in yellow in part A of Fig. 6). After contact establishment, the second phase is initiated, where a rotation of the PCB around the contact line axis [see Fig. 7(c)-(e) and (h)-(j)] is required to accomplish the folding assembly task, as shown in Fig. 7(e) and (j). To switch from phase 1 to phase 2, an interface with a keyboard input is provided to the user. Each phase is characterized by different desired poses, which are defined as follows.
Phase 1 (contact): Owing to the given geometry of the opposing fingers and the planar shape of the PCB, the possible errors are restricted to the PCB plane regarding translation and around the plane normal regarding orientation, as shown in Fig. 8. Hence, the desired position is defined on a 2-D plane (yellow plane in Fig. 7) that includes the contact line (green line in Fig. 7) and has an inclination of 70° from the supporting surface. Such task-related knowledge could be provided by an expert via an appropriate interface, e.g., an augmented reality graphical user interface in which the user defines the inclination of the plane after the contact line has been identified by robotic vision; the interface would then compute and set the parameters of the analytic expression of the task according to the selection of the expert. Therefore, the desired position is described by the parametric plane p_d(σ_1, σ_2). To remain within the workspace of the task, the parameter bounds are set to σ_1 ∈ [0, 2] and σ_2 ∈ [−0.6, 0.6]. The desired orientation is described by the great circle Q_gc of the quaternion sphere S^3 defined in (35) of Appendix A, with n being the vector normal to the plane p_d(σ_1, σ_2), as shown in Fig. 7(a). Note that in this phase, the nearest pose is analytically computable, since the parametric expression is decoupled between translation and rotation: the nearest position is found by projecting the position of the robot onto the plane, and the nearest orientation is found as explained in Appendix A.
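The analytic nearest-position computation of phase 1 is a plain orthogonal projection onto the inclined plane, as the following minimal sketch illustrates (our illustration; the point and normal values in the test are hypothetical).

```python
import numpy as np

def project_to_plane(p, p0, n):
    """Orthogonal projection of the end-effector position p onto the
    desired plane passing through p0 with (not necessarily unit) normal n."""
    n = n / np.linalg.norm(n)
    # Remove the component of (p - p0) along the plane normal.
    return p - np.dot(p - p0, n) * n
```

Because the projection is closed-form, the nearest position is available in every control cycle without any iterative optimization.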
Phase 2: As shown in Fig. 7(c)-(e), the object has to be rotated around the axis λ. Hence, the desired position is described by an arc around the contact line, where p_λ = [0 0.69 0.005]^T m is an arbitrarily selected point on the contact line and p_0b is the end-effector's position at the switch from phase 1 to phase 2. The desired orientation is described accordingly, with Q_0b being the end-effector orientation at the phase switch. For this phase, the dlib C++ library is utilized for the nearest pose search, with a stopping tolerance of 10^−19 (slope of the solution) and a maximum of 120 iterations. To ensure that the optimization algorithm yields a result within one control cycle, we measured the time needed for 120 iterations of the algorithm on this problem and found it to be less than our control cycle of 1 ms. The parameters used for the proposed method are ϑ_m = 3.0°, r_m = 0.005 m, D = 1.1 I_6, k = 0.5, α = 0.025, and σ_1 ∈ [−π, π].
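The phase-2 arc can be generated by rotating the switch-time position about the contact-line axis. The sketch below uses Rodrigues' rotation formula; it is our illustration (the function name and axis direction in the test are assumptions), with p_λ taken from the values quoted above.

```python
import numpy as np

p_lambda = np.array([0.0, 0.69, 0.005])   # point on the contact line

def arc_point(sigma, p0b, axis, p_l=p_lambda):
    """Position on the desired arc: p0b rotated by angle sigma about
    the contact-line axis through p_l (Rodrigues' rotation formula)."""
    k = axis / np.linalg.norm(axis)
    v = p0b - p_l
    v_rot = (v * np.cos(sigma) + np.cross(k, v) * np.sin(sigma)
             + k * np.dot(k, v) * (1.0 - np.cos(sigma)))
    return p_l + v_rot
```

By construction, arc_point(0, p0b, axis) returns p0b itself, so the desired path starts exactly at the end-effector position at the phase switch.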
In this experimental scenario, the gravity compensation (GC) control mode is utilized for comparison purposes. The additional quality metrics utilized in this experiment are the path distances traveled in position (d_p) and in orientation. Boxplots of the statistical results are shown in Fig. 9. More specifically, Fig. 9(a) and (b) depicts the path distances traveled in position and orientation, respectively; the statistical difference between the proposed method (GAC) and GC is clearly shown, with GAC yielding a shorter path for the task accomplishment. In Fig. 9(c) and (d), the energy and the total duration are shown, respectively. Let us note that not all users were able to complete the task with GC, as it was difficult for them, in their own words, to "imagine" or "find out how" to interact with the robot for the rotation of the end-effector during phase 2. Two of them were given further instructions and managed to complete the task, while the other two abandoned it. In the latter case, the total time of physical interaction before abandonment was considered for the statistical analysis.
Similarly to the first experimental scenario, a pairwise t-test was performed to assess whether the difference between the two methods is statistically significant, based on the above metrics. The results are given in Table III, from which we can infer that the mean values of the quality metrics differ significantly, with a sufficient level of confidence (p-values below 0.05). Note that in this application scenario, a significant difference in time duration is also found, as opposed to the previous scenario, implying a reduction of the task difficulty I_d, i.e., of the user's cognitive load. Fig. 10 shows the paths of two representative subjects, who were equally familiar with both control approaches. Note the erratic path under the GC controller as opposed to the proposed one.

VII. DISCUSSION
Given the partial knowledge of the task, the proposed controller enhances the performance of the pHRI task, as demonstrated in the experimental section; its impact is particularly high for tasks that cannot be accomplished by naive users. However, note that the proposed controller refers to static spatial task knowledge (not explicitly depending on time). Tasks that involve, for example, objects placed on a conveyor belt are not addressed. Moreover, partial knowledge may be associated with uncertainty. In this case, the level of parameterization and the selection of control parameter values should reflect this uncertainty; otherwise, the suggested controller may slow down or even prevent the users from reaching the correct pose. In fact, given an estimate of the maximum uncertainty level, e.g., in the end-effector position, we can select higher values for r_m and ϑ_m combined with low k values in the controller, or introduce a higher degree of parameterization; e.g., in the one-dof case, instead of an oriented path, the task can be modeled as a tube around the nominal one-dof curve. To evaluate the effectiveness of the proposed method in the presence of perception errors of a given estimated magnitude, we performed a series of experiments with the cutting task using the same control parameters but with the desired oriented path erroneously defined with respect to the actual one [by translating p_c by 0.2 to 1.5 cm in (27) or rotating v_1 by 1° to 5° around the z-axis in (27) and (28)]. Even though the respective plots are not included, the results clearly indicate that both forces and errors increase as the unreliability of the constraints increases. Nevertheless, the user completes the task with fewer errors under the proposed controller than under the dissipative controller.

VIII. CONCLUSION
In this article, a pHRI controller was proposed to assist the human user in achieving a multi-dof task for which partial knowledge is available. The proposed approach utilizes a parameterized task model and develops a torque control signal according to the PPC methodology. The resulting closed-loop system was proven strictly output passive with respect to the end-effector velocity, and it was shown that the translational and orientational deviations are kept below certain prespecified values. Furthermore, two experimental cases involving a group of subjects were conducted with a KUKA LWR4+ robot, showing that the proposed control scheme outperformed DC, as well as the GC control mode, in terms of both total time and energy.
The inverse unit quaternion Q^{-1} ∈ S^3 is given by Q^{-1} = [η  −ε^T]^T and corresponds to the inverse rotation Rot(k, −ϑ). The quaternion product "*" for two given Q_1 and Q_2 ∈ S^3, corresponding to the rotation matrices R_1 and R_2, respectively, is defined as follows:

Q_1 * Q_2 = [η_1 η_2 − ε_1^T ε_2   (η_1 ε_2 + η_2 ε_1 − S(ε_1) ε_2)^T]^T    (29)

where S(·) denotes the skew-symmetric matrix operator. Equation (29) expresses consecutive rotations in the same order as rotation matrices do, i.e., R_2 R_1. The time derivative of the quaternion can be related to the angular velocity vector ω as follows:

Q̇ = (1/2) J_Q ω    (30)

where

J_Q = [−ε   (η I_3 − S(ε))^T]^T ∈ R^{4×3}.    (31)

The columns of J_Q form an orthogonal basis of the hyperplane tangential to the unit quaternion sphere at Q. Hence, J_Q^T J_Q = I_{3×3} and J_Q^T Q = 0_{3×1} (see, e.g., [63] and [64]). If Q_d denotes the desired orientation of the end-effector frame and Q_c the current orientation, the orientation error between the two frames can be expressed in terms of quaternions as

Q_e = [η_e  ε_e^T]^T = Q_d * Q_c^{-1}.    (32)

It is noteworthy that η_e = cos(ϑ_e/2), with ϑ_e being the relative angle between the two frames (33). By differentiating η_e from (32) and utilizing (33), we get (34). Assume an orientation Q_o ∈ S^3 and a unit vector ω̄_o ∈ R^3 expressing a rotation axis. The great circle determined by the intersection of the unit quaternion sphere with the hyperplane passing through the center and spanned by Q_o and δQ_o = J_{Q_o} ω̄_o, as shown in Fig. 11, corresponds to two complete rotations of Q_o around the axis ω̄_o and is given by the following expression:

Q_gc(γ; Q_o, ω̄_o) = cos(γ/2) Q_o + sin(γ/2) δQ_o.    (35)

Given any orientation Q_1, it is easy to show that the value of γ minimizing the geodesic distance between Q_1 and the great circle Q_gc(γ; Q_o, ω̄_o) can be found analytically as γ* = 2 atan2(δQ_o^T Q_1, Q_o^T Q_1); consequently, the closest orientation of the great circle to Q_1 is Q_gc(γ*; Q_o, ω̄_o).
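The analytic nearest-orientation computation can be sketched as follows. This is our numpy illustration of the great-circle expression (35) and the closed-form minimizer γ*, under the convention Q = [η  ε^T]^T; the function names are ours.

```python
import numpy as np

def J_Q(Q):
    """Tangent-map matrix at unit quaternion Q = [eta, eps]."""
    eta, eps = Q[0], Q[1:]
    S = np.array([[0.0, -eps[2], eps[1]],
                  [eps[2], 0.0, -eps[0]],
                  [-eps[1], eps[0], 0.0]])
    return np.vstack([-eps, eta * np.eye(3) - S])

def closest_on_great_circle(Q1, Qo, w):
    """Closest point to Q1 on the great circle through Qo with
    tangent direction dQo = J_Q(Qo) w (rotation axis w)."""
    dQo = J_Q(Qo) @ w
    dQo = dQo / np.linalg.norm(dQo)
    # Closed-form minimizer of the geodesic distance.
    gamma = 2.0 * np.arctan2(dQo @ Q1, Qo @ Q1)
    return np.cos(gamma / 2.0) * Qo + np.sin(gamma / 2.0) * dQo
```

Since Qo and δQo are orthonormal, the returned quaternion is unit norm by construction, and no iterative optimization is needed.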
Substituting the partial derivative of (6) into (36) and η e from (32) and utilizing Q T